Belief propagation

Belief propagation, also known as sum-product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution of each unobserved node, conditional on any observed nodes. Belief propagation is commonly used in artificial intelligence and information theory, and has demonstrated empirical success in numerous applications, including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability.
The algorithm was first proposed by Judea Pearl in 1982, who formulated it as an exact inference algorithm on trees; it was later extended to polytrees. It has since been shown to be a useful approximate algorithm on general graphs.

If ''X'' = {''X''1, ..., ''X''n} is a set of discrete random variables with a joint mass function ''p'', the marginal distribution of a single ''X''''i'' is simply the summation of ''p'' over all other variables:
:p_{X_i}(x_i) = \sum_{\mathbf{x}': x'_i = x_i} p(\mathbf{x}').
However, this quickly becomes computationally prohibitive: if there are 100 binary variables, then one needs to sum over 2^{99} ≈ 6.338 × 10^{29} possible values. By exploiting the polytree structure, belief propagation allows the marginals to be computed much more efficiently.
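To make this cost concrete, here is a minimal Python sketch (the chain of pairwise factors and every name in it are invented for illustration) that computes a single marginal by brute force; the loop visits all 2^n joint configurations, which is exactly the blow-up that belief propagation avoids on tree-structured models:
<syntaxhighlight lang="python">
import itertools

# Hypothetical unnormalized joint over n binary variables: a chain of
# pairwise factors, each favouring equal neighbouring values.
n = 10

def unnorm_p(x):
    val = 1.0
    for k in range(n - 1):
        val *= 2.0 if x[k] == x[k + 1] else 1.0
    return val

# Brute-force marginal of X_0: sum the joint over all 2**n configurations.
weights = [0.0, 0.0]
for x in itertools.product([0, 1], repeat=n):
    weights[x[0]] += unnorm_p(x)

total = sum(weights)
print([w / total for w in weights])  # feasible at n = 10, hopeless at n = 100
</syntaxhighlight>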
==Description of the sum-product algorithm==
Variants of the belief propagation algorithm exist for several types of graphical models (Bayesian networks and Markov random fields, in particular). We describe here the variant that operates on a factor graph. A factor graph is a bipartite graph containing nodes corresponding to variables ''V'' and factors ''F'', with edges between variables and the factors in which they appear. We can write the joint mass function:
:p(\mathbf{x}) = \prod_{a \in F} f_a(\mathbf{x}_a)
where \mathbf{x}_a is the vector of variables neighbouring the factor node ''a''. Any Bayesian network or Markov random field can be represented as a factor graph.
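To make the bipartite structure concrete, one possible encoding of a small factor graph in Python is sketched below; the three-variable chain x1 - f_a - x2 - f_b - x3, the dictionary layout, and all names are assumptions made for this illustration, not a standard representation:
<syntaxhighlight lang="python">
# A factor graph as two dictionaries: variables with their domains, and
# factors with their scopes (ordered tuples of variable names) plus a
# table giving the value of f_a for each assignment of its scope.
variables = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}

factors = {
    # f_a(x1, x2) and f_b(x2, x3): a chain x1 - f_a - x2 - f_b - x3
    "f_a": {"scope": ("x1", "x2"),
            "table": {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}},
    "f_b": {"scope": ("x2", "x3"),
            "table": {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}},
}

def joint(assign):
    # Unnormalized p(x): product over all factors of f_a(x_a).
    val = 1.0
    for f in factors.values():
        val *= f["table"][tuple(assign[v] for v in f["scope"])]
    return val

print(joint({"x1": 0, "x2": 0, "x3": 1}))  # f_a(0,0) * f_b(0,1) = 2.0
</syntaxhighlight>
The same toy chain is reused in the message-passing sketch further below.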
The algorithm works by passing real-valued functions called ''messages'' along the edges between the hidden nodes. More precisely, if ''v'' is a variable node and ''a'' is a factor node connected to ''v'' in the factor graph, the messages from ''v'' to ''a'' (denoted by \mu_{v \to a}) and from ''a'' to ''v'' (denoted by \mu_{a \to v}) are real-valued functions whose domain is Dom(''v''), the set of values that can be taken by the random variable associated with ''v''. These messages contain the "influence" that one variable exerts on another. The messages are computed differently depending on whether the node receiving the message is a variable node or a factor node. Keeping the same notation:
* A message from a variable node ''v'' to a factor node ''a'' is the product of the messages from all other neighbouring factor nodes (except the recipient; alternatively one can say the recipient sends as message the constant function equal to "1"):
::\forall x_v \in Dom(v),\; \mu_{v \to a}(x_v) = \prod_{a^* \in N(v) \setminus \{a\}} \mu_{a^* \to v}(x_v).
:where ''N''(''v'') is the set of neighbouring (factor) nodes to ''v''. If N(v) \setminus \{a\} is empty, then \mu_{v \to a}(x_v) is set to the uniform distribution.
* A message from a factor node ''a'' to a variable node ''v'' is the product of the factor with messages from all other nodes, marginalised over all variables except the one associated with ''v'':
::\forall x_v \in Dom(v),\; \mu_{a \to v}(x_v) = \sum_{\mathbf{x}'_a: x'_v = x_v} f_a(\mathbf{x}'_a) \prod_{v^* \in N(a) \setminus \{v\}} \mu_{v^* \to a}(x'_{v^*}).
:where ''N''(''a'') is the set of neighbouring (variable) nodes to ''a''. If N(a) \setminus \{v\} is empty, then \mu_{a \to v}(x_v) = f_a(x_v), since in this case x_v = \mathbf{x}_a.
As the previous formula shows, the complete marginalisation is reduced to a sum of products of simpler terms than those appearing in the full joint distribution; this is why it is called the sum-product algorithm.
In a typical run, each message is updated iteratively from the previous values of the neighbouring messages. Different schedules can be used for updating the messages. In the case where the graphical model is a tree, an optimal schedule allows convergence to be reached after computing each message only once (see next sub-section). When the factor graph has cycles, such an optimal schedule does not exist, and a typical choice is to update all messages simultaneously at each iteration, as in the sketch below.
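The following is a minimal Python sketch of the two update rules under the synchronous ("flooding") schedule, reusing the illustrative chain encoding from the factor-graph sketch above; the data layout and all names remain assumptions for this example, not a standard API:
<syntaxhighlight lang="python">
import itertools

# Toy chain x1 - f_a - x2 - f_b - x3, as in the earlier sketch.
variables = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}
factors = {
    "f_a": {"scope": ("x1", "x2"),
            "table": {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}},
    "f_b": {"scope": ("x2", "x3"),
            "table": {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}},
}
neighbours_v = {v: [a for a, f in factors.items() if v in f["scope"]]
                for v in variables}

# A message is a dict mapping each x_v in Dom(v) to a real number,
# initialised to the constant function 1.
msg_va = {(v, a): {x: 1.0 for x in variables[v]}
          for v in variables for a in neighbours_v[v]}
msg_av = {(a, v): {x: 1.0 for x in variables[v]}
          for a, f in factors.items() for v in f["scope"]}

for _ in range(10):  # synchronous schedule: all messages updated at once
    # Variable -> factor: product of incoming factor messages, excluding
    # the recipient (an empty product leaves the constant 1, i.e. uniform).
    new_va = {}
    for (v, a) in msg_va:
        new_va[(v, a)] = {x: 1.0 for x in variables[v]}
        for x in variables[v]:
            for a2 in neighbours_v[v]:
                if a2 != a:
                    new_va[(v, a)][x] *= msg_av[(a2, v)][x]
    # Factor -> variable: factor times other incoming messages,
    # marginalised over all scope variables except v.
    new_av = {}
    for (a, v) in msg_av:
        scope = factors[a]["scope"]
        new_av[(a, v)] = {x: 0.0 for x in variables[v]}
        for assign in itertools.product(*(variables[u] for u in scope)):
            w = factors[a]["table"][assign]
            for u, xu in zip(scope, assign):
                if u != v:
                    w *= msg_va[(u, a)][xu]
            new_av[(a, v)][assign[scope.index(v)]] += w
    msg_va, msg_av = new_va, new_av
</syntaxhighlight>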
Upon convergence (if convergence occurred), the estimated marginal distribution of each node is proportional to the product of all messages from adjoining factors (up to the normalization constant):
: p_{X_v}(x_v) \propto \prod_{a \in N(v)} \mu_{a \to v}(x_v).
Likewise, the estimated joint marginal distribution of the set of variables belonging to one factor is proportional to the product of the factor and the messages from the variables:
: p_{X_a}(\mathbf{x}_a) \propto f_a(\mathbf{x}_a) \prod_{v \in N(a)} \mu_{v \to a}(x_v).
In the case where the factor graph is acyclic (i.e., a tree or a forest), these estimated marginals actually converge to the true marginals in a finite number of iterations. This can be shown by mathematical induction.
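Continuing the sketch above (this fragment reuses its variables, neighbours_v, and msg_av rather than standing alone), the estimated marginals are read off by multiplying the incoming factor messages and normalising; on this acyclic toy chain they agree exactly with brute-force marginalisation:
<syntaxhighlight lang="python">
# Belief at each variable: product of all incoming factor messages,
# normalised to sum to one (supplying the proportionality constant).
for v in variables:
    belief = {x: 1.0 for x in variables[v]}
    for a in neighbours_v[v]:
        for x in variables[v]:
            belief[x] *= msg_av[(a, v)][x]
    Z = sum(belief.values())
    print(v, {x: b / Z for x, b in belief.items()})
</syntaxhighlight>
Because the toy factors are symmetric under swapping 0 and 1, every belief comes out as {0: 0.5, 1: 0.5}, matching the exact marginals of the joint.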
